Learning Nonparametric Volterra Kernels with Gaussian Processes
This paper introduces a method for the nonparametric Bayesian learning of nonlinear operators, through the use of the Volterra series with kernels represented using Gaussian processes (GPs), which we term the nonparametric Volterra kernels model (NVKM). When the input function to the operator is unobserved and has a GP prior, the NVKM constitutes a powerful method for both single and multiple output regression, and can be viewed as a nonlinear and nonparametric latent force model. When the input function is observed, the NVKM can be used to perform Bayesian system identification. We use recent advances in efficient sampling of explicit functions from GPs to map process realisations through the Volterra series without resorting to numerical integration, allowing scalability through doubly stochastic variational inference, and avoiding the need for Gaussian approximations of the output processes. We demonstrate the performance of the model for both multiple output regression and system identification using standard benchmarks.
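A minimal sketch of the core mechanism described above: sample an explicit realisation of the input function from a GP prior and map it through a truncated second-order discrete Volterra series. The kernels `h1` and `h2` here are fixed illustrative arrays, not the GP-represented kernels the NVKM actually learns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample an explicit realisation u(t) from a GP with an RBF covariance.
t = np.linspace(0, 1, 100)
K = np.exp(-0.5 * (t[:, None] - t[None, :])**2 / 0.05**2)
u = rng.multivariate_normal(np.zeros(len(t)), K + 1e-8 * np.eye(len(t)))

# Truncated discrete Volterra series with placeholder kernels:
#   y[n] = sum_i h1[i] u[n-i] + sum_{i,j} h2[i,j] u[n-i] u[n-j]
M = 10                                   # memory length
h1 = np.exp(-np.arange(M) / 3.0)         # first-order kernel (placeholder)
h2 = 0.1 * np.outer(h1, h1)              # second-order kernel (placeholder)

def volterra(u, h1, h2):
    M = len(h1)
    y = np.zeros_like(u)
    for n in range(M, len(u)):
        lag = u[n - np.arange(M)]        # [u[n], u[n-1], ..., u[n-M+1]]
        y[n] = h1 @ lag + lag @ h2 @ lag
    return y

y = volterra(u, h1, h2)
```

Because the realisation `u` is an explicit function sample, the output is obtained by direct evaluation rather than numerical integration over a GP posterior, which is what makes the doubly stochastic training in the paper tractable.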
- Asia (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- North America > Canada > Quebec > Montreal (0.04)
Towards Universal Neural Operators through Multiphysics Pretraining
Masliaev, Mikhail, Gusarov, Dmitry, Markov, Ilya, Hvatov, Alexander
Although neural operators are widely used in data-driven physical simulations, their training remains computationally expensive. Recent advances address this issue via downstream learning, where a model pretrained on simpler problems is fine-tuned on more complex ones. In this research, we investigate transformer-based neural operators, which have previously been applied only to specific problems, in a more general transfer learning setting. We evaluate their performance across diverse PDE problems, including extrapolation to unseen parameters, incorporation of new variables, and transfer from multi-equation datasets. Our results demonstrate that advanced neural operator architectures can effectively transfer knowledge across PDE problems.
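The downstream-learning setup above (pretrain on a simpler problem, fine-tune on a related harder one) can be illustrated with a toy least-squares "operator" standing in for a transformer neural operator; the task matrices and the `fit` helper are synthetic and hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit(X, Y, W0, steps=200, lr=0.1):
    """Gradient descent on the least-squares loss, from initial weights W0."""
    W = W0.copy()
    for _ in range(steps):
        G = X.T @ (X @ W - Y) / len(X)   # least-squares gradient step
        W -= lr * G
    return W

d = 8
W_true_simple = rng.normal(size=(d, d))
W_true_hard = W_true_simple + 0.1 * rng.normal(size=(d, d))  # related task

X = rng.normal(size=(256, d))

# Pretrain on the simpler problem, then fine-tune on the harder one with a
# small step budget, versus training from scratch with the same budget.
W_pre = fit(X, X @ W_true_simple, np.zeros((d, d)))
W_scratch = fit(X, X @ W_true_hard, np.zeros((d, d)), steps=20)
W_finetune = fit(X, X @ W_true_hard, W_pre, steps=20)

err = lambda W: np.linalg.norm(W - W_true_hard)
```

Warm-starting from the pretrained weights reaches lower error under the same fine-tuning budget, the effect the transfer experiments in the paper measure at scale.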
- Europe > Russia > Northwestern Federal District > Leningrad Oblast > Saint Petersburg (0.05)
- Asia > Russia (0.05)
A data-free neural operator enabling fast inference of 2D and 3D Navier-Stokes equations
Choi, Junho, Chang, Teng-Yuan, Kim, Namjung, Hong, Youngjoon
Ensemble simulations of high-dimensional flow models (e.g., Navier-Stokes-type PDEs) are computationally prohibitive for real-time applications. Neural operators enable fast inference but are limited by costly data requirements and poor generalization to 3D flows. We present a data-free operator network for the Navier-Stokes equations that eliminates the need for paired solution data and enables robust, real-time inference for large ensemble forecasting. The physics-grounded architecture takes initial and boundary conditions as well as forcing functions, yielding solutions robust to high variability and perturbations. Across 2D benchmarks and 3D test cases, the method surpasses prior neural operators in accuracy and, for ensembles, achieves greater efficiency than conventional numerical solvers. Notably, it delivers accurate solutions of the three-dimensional Navier-Stokes equations--a regime not previously demonstrated for data-free neural operators. By uniting a numerically grounded architecture with the scalability of machine learning, this approach establishes a practical pathway toward data-free, high-fidelity PDE surrogates for end-to-end scientific simulation and prediction.

Solving PDEs efficiently and accurately is one of the central interests for science and engineering. In addition, when dealing with various boundary conditions, initial conditions, or external forcing terms of PDEs in fields such as fluid mechanics [1-3], materials science [4, 5], weather forecasting [6, 7], and design optimization [8, 9], PDEs are often required to be solved repeatedly. However, conventional numerical solvers become prohibitively expensive in such settings, particularly for three-dimensional incompressible Navier-Stokes equations (NSEs) [10, 11]. This is because these solvers rely on spatial-temporal discretization and iterative treatment of nonlinear terms, while performing time marching that demands substantial memory and computation. Moreover, they are not well suited for solving large ensembles of scenarios simultaneously, such as those required for uncertainty quantification or design exploration. The resulting computational time, coupled with the need for extensive sampling in ensemble or probabilistic simulations, constitutes a critical bottleneck [7, 12].
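The "data-free" training signal referred to above replaces paired (input, solution) data with the PDE residual itself. A minimal sketch of that idea for 1D Poisson `u'' = f` with finite differences (an illustrative stand-in, far simpler than the Navier-Stokes setting of the paper):

```python
import numpy as np

# Data-free supervision: score a candidate solution by its PDE residual
# rather than by distance to a precomputed reference solution.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = -np.pi**2 * np.sin(np.pi * x)        # forcing whose solution is sin(pi x)

def residual(u, f, h):
    # centred finite-difference approximation of u'' minus f, interior points
    return (u[:-2] - 2*u[1:-1] + u[2:]) / h**2 - f[1:-1]

u_exact = np.sin(np.pi * x)
u_wrong = x * (1 - x)                     # satisfies the BCs but not the PDE

loss_exact = np.mean(residual(u_exact, f, h)**2)
loss_wrong = np.mean(residual(u_wrong, f, h)**2)
```

The residual loss vanishes (up to discretization error) only on the true solution, so it can train a surrogate without any solver-generated data.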
- Asia > South Korea > Seoul > Seoul (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia > South Korea > Daejeon > Daejeon (0.04)
SetONet: A Set-Based Operator Network for Solving PDEs with Variable-Input Sampling
Tretiakov, Stepan, Li, Xingjian, Kumar, Krishna
Neural operators, particularly the Deep Operator Network (DeepONet), have shown promise in learning mappings between function spaces for solving differential equations. However, standard DeepONet requires input functions to be sampled at fixed locations, limiting its applicability when sensor configurations vary or inputs exist on irregular grids. We introduce the Set Operator Network (SetONet), which modifies DeepONet's branch network to process input functions as unordered sets of location-value pairs. By incorporating Deep Sets principles, SetONet ensures permutation invariance while maintaining the same parameter count as the baseline. On classical operator-learning benchmarks, SetONet achieves parity with DeepONet on fixed layouts while sustaining accuracy under variable sensor configurations or sensor drop-off, conditions under which standard DeepONet is not applicable. More significantly, SetONet natively handles problems where inputs are naturally represented as unstructured point clouds (such as point sources or density samples) rather than values on fixed grids, a capability standard DeepONet lacks. On heat conduction with point sources, advection-diffusion modeling of chemical plumes, and optimal transport between density samples, SetONet learns operators end-to-end without rasterization or multi-stage pipelines. As a lightweight DeepONet-class architecture, SetONet significantly broadens the applicability of operator learning to problems with variable, incomplete, or unstructured input data.
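The Deep Sets construction behind the branch-network modification can be sketched in a few lines: a shared map `phi` is applied to each (location, value) pair, the results are pooled with a symmetric sum, and `rho` maps the pooled vector to branch features. Weights here are random placeholders, not trained SetONet parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

W_phi = rng.normal(size=(2, 16))   # per-point map phi (placeholder weights)
W_rho = rng.normal(size=(16, 8))   # post-pooling map rho (placeholder weights)

def branch(points):
    # points: (num_sensors, 2) array of (x, u(x)) pairs, any order, any count
    h = np.tanh(points @ W_phi)    # phi, applied to each pair independently
    pooled = h.sum(axis=0)         # symmetric pooling: order cannot matter
    return np.tanh(pooled @ W_rho) # rho

pts = rng.normal(size=(40, 2))
out = branch(pts)
out_perm = branch(pts[rng.permutation(40)])
```

Sum pooling is what buys both permutation invariance and tolerance to a variable number of sensors: dropping or reordering points changes only which terms enter the sum.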
- North America > United States > Texas > Travis County > Austin (0.04)
- Europe > United Kingdom > North Sea > Southern North Sea (0.04)
- Europe > Portugal > Braga > Braga (0.04)
- Asia > India > Tripura (0.04)
Deep Neural ODE Operator Networks for PDEs
Li, Ziqian, Liu, Kang, Song, Yongcun, Yue, Hangrui, Zuazua, Enrique
Operator learning has emerged as a promising paradigm for developing efficient surrogate models to solve partial differential equations (PDEs). However, existing approaches often overlook the domain knowledge inherent in the underlying PDEs and hence suffer from challenges in capturing temporal dynamics and generalization issues beyond training time frames. This paper introduces a deep neural ordinary differential equation (ODE) operator network framework, termed NODE-ONet, to alleviate these limitations. The framework adopts an encoder-decoder architecture comprising three core components: an encoder that spatially discretizes input functions, a neural ODE capturing latent temporal dynamics, and a decoder reconstructing solutions in physical spaces. Theoretically, error analysis for the encoder-decoder architecture is investigated. Computationally, we propose novel physics-encoded neural ODEs to incorporate PDE-specific physical properties. Such well-designed neural ODEs significantly reduce the framework's complexity while enhancing numerical efficiency, robustness, applicability, and generalization capacity. Numerical experiments on nonlinear diffusion-reaction and Navier-Stokes equations demonstrate high accuracy, computational efficiency, and prediction capabilities beyond training time frames. Additionally, the framework's flexibility to accommodate diverse encoders/decoders and its ability to generalize across related PDE families further underscore its potential as a scalable, physics-encoded tool for scientific machine learning.
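The encoder-ODE-decoder pipeline described above can be sketched with an explicit Euler integrator in the latent space; all weights here are random, untrained placeholders, and the linear encoder/decoder are illustrative choices, not the paper's components.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_lat = 32, 8
E = rng.normal(size=(d_in, d_lat)) / np.sqrt(d_in)   # encoder (placeholder)
A = rng.normal(size=(d_lat, d_lat)) / d_lat          # latent vector field
D = rng.normal(size=(d_lat, d_in)) / np.sqrt(d_lat)  # decoder (placeholder)

def vector_field(z):
    # neural-ODE right-hand side acting on the latent state
    return np.tanh(z @ A)

def solve(u0, t_end, dt=0.01):
    z = u0 @ E                           # encode the sampled input function
    for _ in range(int(round(t_end / dt))):
        z = z + dt * vector_field(z)     # explicit Euler step in latent space
    return z @ D                         # decode back to physical space

u0 = np.sin(np.linspace(0, 2 * np.pi, d_in))
u_half = solve(u0, 0.5)
u_one = solve(u0, 1.0)
```

Because `t_end` is a free argument of the integrator, the same trained model can be queried beyond the training time frame, which is the generalization property the paper emphasizes.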
- Europe > Spain > Galicia > Madrid (0.04)
- North America > United States > Rhode Island > Providence County > Providence (0.04)
- Europe > Germany (0.04)
- (6 more...)
FAME: Adaptive Functional Attention with Expert Routing for Function-on-Function Regression
Gao, Yifei, Chen, Yong, Zhang, Chen
Functional data play a pivotal role across science and engineering, yet their infinite-dimensional nature makes representation learning challenging. Conventional statistical models depend on pre-chosen basis expansions or kernels, limiting the flexibility of data-driven discovery, while many deep-learning pipelines treat functions as fixed-grid vectors, ignoring their inherent continuity. In this paper, we introduce Functional Attention with a Mixture-of-Experts (FAME), an end-to-end, fully data-driven framework for function-on-function regression. FAME forms continuous attention by coupling a bidirectional neural controlled differential equation with MoE-driven vector fields to capture intra-functional continuity, and further captures inter-functional dependencies via multi-head cross-attention. Extensive experiments on synthetic and real-world functional-regression benchmarks show that FAME achieves state-of-the-art accuracy and strong robustness to arbitrarily sampled discrete observations of functions.
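The MoE-driven vector field at the heart of FAME's continuous attention can be sketched as a softmax gate mixing several expert maps of the latent state; the gating and expert weights below are random placeholders, not FAME's trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_exp = 6, 4
G = rng.normal(size=(d, n_exp))                  # gating weights (placeholder)
W = rng.normal(size=(n_exp, d, d)) / np.sqrt(d)  # one matrix per expert

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def moe_field(z):
    gate = softmax(z @ G)                         # mixture weights, sum to 1
    experts = np.tanh(np.einsum('d,edk->ek', z, W))  # each expert's output
    return gate @ experts                         # convex combination

z = rng.normal(size=d)
v = moe_field(z)
```

Because the gate depends on the current state `z`, different experts dominate in different regions of the latent space, which is what lets a single vector field adapt across heterogeneous functional inputs.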
- North America > United States > Hawaii (0.05)
- Asia > China > Beijing > Beijing (0.04)
- North America > United States > Iowa > Johnson County > Iowa City (0.04)
- Asia > Middle East > Republic of Türkiye > Karaman Province > Karaman (0.04)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.67)
- Information Technology > Data Science (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.93)
Deep set based operator learning with uncertainty quantification
Ma, Lei, Guo, Ling, Wu, Hao, Zhou, Tao
Learning operators from data is central to scientific machine learning. While DeepONets are widely used for their ability to handle complex domains, they require fixed sensor numbers and locations, lack mechanisms for uncertainty quantification (UQ), and are thus limited in practical applicability. Recent permutation-invariant extensions, such as the Variable-Input Deep Operator Network (VIDON), relax these sensor constraints but still rely on sufficiently dense observations and cannot capture uncertainties arising from incomplete measurements or from operators with inherent randomness. To address these challenges, we propose UQ-SONet, a permutation-invariant operator learning framework with built-in UQ. Our model integrates a set transformer embedding to handle sparse and variable sensor locations, and employs a conditional variational autoencoder (cVAE) to approximate the conditional distribution of the solution operator. By minimizing the negative ELBO, UQ-SONet provides principled uncertainty estimation while maintaining predictive accuracy. Numerical experiments on deterministic and stochastic PDEs, including the Navier-Stokes equation, demonstrate the robustness and effectiveness of the proposed framework.

Introduction

Learning continuous operators or complex systems from scattered data streams has shown promising success in scientific machine learning, which focuses on modeling mappings between infinite-dimensional function spaces. A prominent example is the Fourier Neural Operator (FNO) [1], which parameterizes the integral kernel in Fourier space. While highly effective for problems on domains that can be discretized or efficiently mapped to Cartesian grids, the applicability of FNO to more general settings remains limited [2].
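The negative ELBO that UQ-SONet minimizes has the generic VAE form: a reconstruction term plus a KL term pulling the diagonal-Gaussian posterior toward a standard-normal prior. A sketch of that objective (the generic formula for a Gaussian likelihood with an assumed observation scale `sigma_obs`, not the paper's code):

```python
import numpy as np

def neg_elbo(x, x_recon, mu, logvar, sigma_obs=0.1):
    # -ELBO = reconstruction NLL + KL( q(z|x) || N(0, I) )
    # Gaussian reconstruction negative log-likelihood (up to a constant)
    rec = 0.5 * np.sum((x - x_recon)**2) / sigma_obs**2
    # closed-form KL between N(mu, diag(exp(logvar))) and N(0, I)
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return rec + kl

x = np.ones(5)
loss_good = neg_elbo(x, x, np.zeros(3), np.zeros(3))   # perfect recon, KL = 0
loss_bad = neg_elbo(x, np.zeros(5), np.ones(3), np.ones(3))
```

Drawing multiple latent samples `z ~ q(z|x)` at test time and decoding each one is what turns this training objective into the predictive uncertainty estimates the abstract describes.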
- Research Report > Experimental Study (0.93)
- Research Report > New Finding (0.92)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- Information Technology > Sensing and Signal Processing > Image Processing (0.67)